168 research outputs found

    Agile Ways of Working: A Team Maturity Perspective

    With the agile approach to managing software development projects comes an increased dependence on well-functioning teams, since many of the practices are built on teamwork. The objective of this study was to investigate if, and how, team development from a group-psychology perspective is related to some work practices of agile teams. Data were collected from 34 agile teams (200 individuals) at six software development organizations and one university in Brazil and Sweden, using the Group Development Questionnaire (Scale IV) and the Perceptive Agile Measurement (PAM). The results indicate a strong correlation between group maturity levels and the two agile practices iterative development and retrospectives. We therefore conclude that agile teams at different group development stages adopt parts of team agility differently, confirming previous studies but with more data and by investigating concrete, applied agile practices. We thereby add evidence to the hypothesis that agile implementations and the management of agile projects need to be adapted to the group maturity levels of the agile teams.
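    The reported association between the GDQ Scale IV and PAM scores is a correlation over team-level measurements. The sketch below is not the study's analysis code; it only illustrates, with made-up placeholder numbers, how such a correlation could be computed.

```python
# Minimal sketch (not the authors' analysis): correlate hypothetical per-team
# group-maturity scores (GDQ Scale IV) with an agile-practice score (PAM).
# All numbers are invented placeholders, not data from the study.
from scipy.stats import spearmanr

gdq_scale4 = [52.1, 61.4, 58.0, 70.3, 66.8, 49.5]   # hypothetical team averages
pam_iterative = [3.1, 3.9, 3.6, 4.5, 4.2, 2.8]      # hypothetical PAM scores

rho, p_value = spearmanr(gdq_scale4, pam_iterative)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```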

    A tool for obtaining information on DTN traces

    Applications for dynamic networks are growing every day, and so is the number of studies on them. An important part of such studies is the generation of results through simulation and comparison with other works. We implemented a tool that generates information about a given network trace by building its corresponding evolving graph. This information helps researchers choose the most suitable trace for their work, interpret their results correctly, and compare data from their work against the optimal results achievable in the network. In this work, we present the implementation of the DTNTES tool, which provides the aforementioned services, and use it to evaluate the DieselNet trace.
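    The abstract does not detail DTNTES's internals, but the core idea of an evolving graph built from a contact trace can be sketched as follows. This is a minimal illustration under an assumed trace format of (node, node, start, end) contact tuples, not the tool's actual code; it computes earliest-arrival (foremost) delivery times from one source, one example of the optimal baseline such a trace summary can report.

```python
# Minimal sketch of an evolving graph from a DTN contact trace plus a
# foremost-journey (earliest-arrival) computation. The trace format and the
# example contacts are assumptions for illustration; this is not DTNTES code.
import math

contacts = [            # (node_a, node_b, start, end) -- made-up trace
    ("A", "B", 0, 10),
    ("B", "C", 5, 20),
    ("A", "C", 30, 40),
]

def earliest_arrival(contacts, source):
    """Earliest time each node can receive a message created at t=0 at source,
    assuming forwarding is possible during any contact still open once the
    message has reached one of its endpoints."""
    arrival = {source: 0.0}
    changed = True
    # Sweep contacts by start time; repeat until no improvement, since one
    # pass can miss relays enabled by updates made later in the sweep.
    while changed:
        changed = False
        for u, v, t_start, t_end in sorted(contacts, key=lambda c: c[2]):
            for a, b in ((u, v), (v, u)):
                t_a = arrival.get(a, math.inf)
                if t_a <= t_end:                 # message present before contact closes
                    t_b = max(t_a, t_start)      # wait for the contact to open
                    if t_b < arrival.get(b, math.inf):
                        arrival[b] = t_b
                        changed = True
    return arrival

print(earliest_arrival(contacts, "A"))   # earliest delivery time per node
```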

    A Delay-Tolerant Network Routing Algorithm Based on Column Generation

    Delay-Tolerant Networks (DTNs) model systems characterized by intermittent connectivity and frequent partitioning. Routing in DTNs has drawn much research effort recently. Since very different kinds of networks fall into the DTN category, many routing approaches have been proposed. In particular, the routing layer in some DTNs has information about the schedule of contacts between nodes and about the data traffic demand. Such systems can benefit from a previously proposed routing algorithm, based on linear programming, that minimizes the average message delay. This algorithm, however, is known to have performance issues that limit its applicability to very simple scenarios. In this work, we propose an alternative linear programming approach for routing in Delay-Tolerant Networks. We show that our formulation is equivalent to the one presented in a seminal work in this area, but it contains fewer LP constraints and has a structure suitable for the application of Column Generation (CG). Simulations show that our CG implementation reaches an optimal solution up to three orders of magnitude faster than the original linear program on the considered DTN examples.
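    The abstract does not spell out the formulation, so the following is only a schematic illustration of the column-generation mechanics it refers to: a restricted master LP over journey (path) columns, duals read back from the solver, and a pricing step that adds a journey with negative reduced cost. The contact capacities, delays, demand, and the enumerated journey pool are invented placeholders, the pricing step merely scans that small pool instead of searching the contact schedule, and SciPy's HiGHS interface stands in for whatever LP solver the authors used.

```python
# Toy column-generation loop for a path-based routing LP (illustration only,
# not the paper's formulation). Columns are journeys; the master minimizes
# total delay subject to demand satisfaction and per-contact capacities.
from scipy.optimize import linprog

contacts = {"AB": 1.0, "BC": 1.0, "AC": 3.0}   # contact -> capacity (made up)
contact_ids = list(contacts)
demand = 2.0                                    # messages from A to C (made up)

journey_pool = [                                # assumed candidate journeys
    {"delay": 5.0,  "uses": ["AB", "BC"]},
    {"delay": 30.0, "uses": ["AC"]},
]

def solve_master(columns):
    """Restricted master: min total delay s.t. demand (equality) and
    per-contact capacity (inequalities)."""
    c = [col["delay"] for col in columns]
    A_eq = [[1.0] * len(columns)]               # every journey serves the demand
    b_eq = [demand]
    A_ub = [[1.0 if cid in col["uses"] else 0.0 for col in columns]
            for cid in contact_ids]
    b_ub = [contacts[cid] for cid in contact_ids]
    return linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")

columns = [journey_pool[1]]                     # start with the direct, slow journey
while True:
    res = solve_master(columns)
    y_demand = res.eqlin.marginals[0]
    y_cap = dict(zip(contact_ids, res.ineqlin.marginals))
    # Pricing: reduced cost = delay - dual(demand) - sum of duals of used contacts.
    best, best_rc = None, -1e-9
    for j in journey_pool:
        rc = j["delay"] - y_demand - sum(y_cap[c] for c in j["uses"])
        if rc < best_rc and j not in columns:
            best, best_rc = j, rc
    if best is None:
        break                                   # no improving journey: optimal
    columns.append(best)

print("total delay:", res.fun, "average delay:", res.fun / demand)
print("flows:", dict(zip([tuple(c["uses"]) for c in columns], res.x)))
```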

    Adding tightly-integrated task scheduling acceleration to a RISC-V multi-core processor

    Task parallelism is a parallel programming model that provides code annotation constructs to outline tasks and describe how their pointer parameters are accessed, so that the tasks can be executed in parallel, and asynchronously, by a runtime capable of inferring and honoring their data dependence relationships. It is supported by several parallelization frameworks, such as OpenMP and StarSs. Overhead related to automatic dependence inference and to the scheduling of ready-to-run tasks is a major performance-limiting factor of task-parallel systems. To amortize this overhead, programmers usually trade the higher parallelism that could be leveraged from finer-grained work partitions for the higher runtime efficiency of coarser-grained work partitions. The problem is even more severe for systems with many cores, as the task-spawning frequency required to keep cores from starving grows linearly with their number. To mitigate these problems, researchers have designed hardware accelerators to improve runtime performance; nevertheless, the high CPU-accelerator communication overheads of these solutions hampered their gains. We thus propose a RISC-V based architecture that minimizes the communication overhead between the hardware task scheduler and the CPU by allowing task-scheduling software to interact directly with the scheduler through custom instructions. Empirical evaluation of the architecture is made possible by an FPGA prototype featuring an eight-core Linux-capable Rocket Chip that implements these instructions. To evaluate the prototype's performance, we (1) adapted Nanos, a mature task scheduling runtime, to benefit from the new task-scheduling acceleration instructions; and (2) developed Phentos, a new hardware-accelerated lightweight task scheduling runtime. Our experiments show that task-parallel programs using Nanos-RV (the Nanos version ported to our system) are on average 2.13 times faster than those serviced by baseline Nanos, while programs running on Phentos are 13.19 times faster, considering geometric means. Using eight cores, Nanos-RV delivers speedups over serial execution of up to 5.62 times, while Phentos produces speedups of up to 5.72 times. This work was supported by the Spanish Government (projects SEV-2015-0493 and TIN2015-65316-P), the Generalitat de Catalunya (2017-SGR-1414 and 2017-SGR1328), FAPESP (grants 2017/02682-2, 2018/00687-0, and 2014/25694-8), CNPq (grant 408782/2016-1), and CAPES.
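    To make concrete what "automatic dependence inference" and "scheduling of ready-to-run tasks" involve, here is a minimal software sketch of that bookkeeping. It is written in Python purely for illustration and is not the Nanos-RV or Phentos code; it tracks only the last writer of each datum (RAW/WAW dependences, ignoring WAR) and releases tasks wave by wave. This per-spawn work is the kind of cost that motivates offloading scheduling to hardware.

```python
# Minimal sketch of task-parallel runtime bookkeeping (illustration only):
# infer dependences from in/out annotations at spawn time and release tasks
# whose predecessors have finished. Only the last writer per datum is
# tracked (RAW/WAW); a real runtime also handles WAR dependences.
from concurrent.futures import ThreadPoolExecutor

class TaskGraph:
    def __init__(self):
        self.tasks = []            # list of (function, set of predecessor ids)
        self.last_writer = {}      # datum name -> id of last task writing it

    def spawn(self, fn, ins=(), outs=()):
        """Register a task; 'ins'/'outs' name the data it reads/writes."""
        deps = {self.last_writer[a] for a in (*ins, *outs) if a in self.last_writer}
        tid = len(self.tasks)
        self.tasks.append((fn, deps))
        for a in outs:
            self.last_writer[a] = tid
        return tid

    def run(self):
        """Execute tasks in dependence order, wave by wave."""
        done = set()
        with ThreadPoolExecutor() as pool:
            while len(done) < len(self.tasks):
                ready = [i for i, (_, deps) in enumerate(self.tasks)
                         if i not in done and deps <= done]
                list(pool.map(lambda i: self.tasks[i][0](), ready))
                done |= set(ready)

# Usage: two independent producers followed by a consumer of both.
g = TaskGraph()
g.spawn(lambda: print("produce x"), outs=["x"])
g.spawn(lambda: print("produce y"), outs=["y"])
g.spawn(lambda: print("consume x, y"), ins=["x", "y"])
g.run()
```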

    Da ciência à e-ciência: paradigmas da descoberta do conhecimento (From science to e-science: paradigms of knowledge discovery)

    Computer Science is gradually evolving from a mere “supporting tool” for research in other fields into an intrinsic part of the very methods of the sciences with which it interacts. The synergy between Computer Science and other fields of knowledge has created a novel way of doing science, called eScience, which unifies theory, experiments, and simulation, enabling researchers to deal with huge amounts of information. The use of cloud computing has the potential to allow any researcher to conduct work previously restricted to those with access to supercomputers. This article presents a brief history of the evolution of scientific paradigms, from empiricism to the current landscape of eScience, and discusses the potential of cloud computing as a tool capable of catalyzing transformative research.